A method to perform offline and online speaker diarization for an unlimited number of speakers is described in this paper. End-to-end neural diarization (EEND) has achieved overlap-aware speaker diarization by formulating it as a multi-label classification problem. It has also been extended to handle a flexible number of speakers by introducing speaker-wise attractors. However, the number of speakers that attractor-based EEND can output is empirically capped; it cannot deal with cases where more speakers appear during inference than during training because its speaker counting is trained in a fully supervised manner. Our method, EEND-GLA, solves this problem by introducing unsupervised clustering into attractor-based EEND. In this method, the input audio is first divided into short blocks, attractor-based diarization is then performed for each block, and finally the per-block results are clustered on the basis of the similarity between locally-calculated attractors. While the number of output speakers is limited within each block, the total number of speakers estimated for the entire input can exceed that limit. To use EEND-GLA in an online manner, our method also extends the speaker-tracing buffer, which was originally proposed to enable online inference with conventional EEND. We introduce a block-wise buffer update to make the speaker-tracing buffer compatible with EEND-GLA. Finally, to improve online diarization, our method refines the buffer update and revisits the variable chunk-size training of EEND. The experimental results demonstrate that EEND-GLA can perform speaker diarization for an unseen number of speakers in both offline and online inference.
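As a rough illustration of the block-wise processing described above, the following Python sketch divides the input into blocks, runs a per-block attractor-based diarizer, and merges speakers across blocks by clustering their attractors. It is not the authors' implementation; the local diarizer callable, block length, clustering method, and distance threshold are placeholder assumptions.

```python
# Sketch of block-wise diarization with inter-block attractor clustering.
# `local_diarizer` stands in for an attractor-based EEND model; block length
# and the clustering threshold are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist


def diarize_blockwise(features, local_diarizer, block_len=500, threshold=0.5):
    """features: [T, F] frame-level features of the whole recording.
    local_diarizer(block) -> (attractors [S, D], activities [S, T_block]),
    where S is bounded within each block."""
    attractors, segments = [], []
    for start in range(0, len(features), block_len):
        att, act = local_diarizer(features[start:start + block_len])
        for a, p in zip(att, act):
            attractors.append(a)
            segments.append((start, p))

    # Agglomerative clustering of locally-calculated attractors (cosine
    # distance); speakers from different blocks that fall in the same cluster
    # are treated as the same person, so the global count can exceed the
    # per-block limit.
    labels = fcluster(linkage(pdist(np.stack(attractors), metric="cosine"),
                              method="average"),
                      t=threshold, criterion="distance")

    timeline = np.zeros((labels.max(), len(features)))
    for lab, (start, p) in zip(labels, segments):
        end = start + len(p)
        timeline[lab - 1, start:end] = np.maximum(timeline[lab - 1, start:end], p)
    return timeline  # [num_global_speakers, T] speech activity per speaker
```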
Onomatopoeic words are character sequences that phonetically imitate sounds and are effective for expressing characteristics of a sound such as its duration, pitch, and timbre. We propose an environmental-sound-extraction method that uses an onomatopoeic word to specify the target sound to be extracted. With this method, we estimate a time-frequency mask from the input mixture spectrogram and the onomatopoeic word using a U-Net architecture, and then extract the corresponding target sound by masking the spectrogram. Experimental results show that the proposed method extracts only the target sound corresponding to the onomatopoeic word and performs better than conventional methods that specify the target sound by its sound-event class.
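A minimal sketch of the mask-and-extract step described above, assuming a trained mask predictor conditioned on the onomatopoeic word; the `predict_mask` callable, sampling rate, and STFT settings are hypothetical.

```python
# Mask-based extraction of the sound specified by an onomatopoeic word.
# `predict_mask(mix_spec_mag, onomatopoeia)` (hypothetical) returns a
# time-frequency mask in [0, 1], e.g. the output of a conditioned U-Net.
import numpy as np
from scipy.signal import stft, istft


def extract_target_sound(mixture, predict_mask, onomatopoeia, fs=16000, nperseg=512):
    """mixture: 1-D waveform; returns the waveform of the specified sound."""
    _, _, spec = stft(mixture, fs=fs, nperseg=nperseg)      # complex spectrogram
    mask = predict_mask(np.abs(spec), onomatopoeia)          # [freq, time] mask
    _, target = istft(spec * mask, fs=fs, nperseg=nperseg)   # mask, then invert
    return target
```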
This study targets the mixed-integer black-box optimization (MI-BBO) problem, in which continuous and integer variables must be optimized simultaneously. The CMA-ES, our focus in this study, is a population-based stochastic search method that samples solution candidates from a multivariate Gaussian distribution (MGD) and shows excellent performance in continuous BBO. The parameters of the MGD, its mean and (co)variance, are updated based on the evaluation values of candidate solutions in the CMA-ES. If the CMA-ES is applied to the MI-BBO with straightforward discretization, however, the variance corresponding to the integer variables becomes much smaller than the granularity of the discretization before the optimal solution is reached, which leads to stagnation of the optimization. In particular, when binary variables are included in the problem, this stagnation is more likely to occur because the granularity of the discretization becomes wider, and the existing modification to the CMA-ES does not address it. To overcome these limitations, we propose a simple extension of the CMA-ES based on lower-bounding the marginal probabilities associated with the generation of integer variables in the MGD. Numerical experiments on MI-BBO benchmark problems demonstrate the efficiency and robustness of the proposed method. Furthermore, to demonstrate the generality of the idea, we also incorporate it into the multi-objective CMA-ES, beyond the single-objective case, and verify its performance on bi-objective mixed-integer benchmark problems.
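The core idea of lower-bounding the marginal probabilities can be illustrated as follows. This is a simplified stand-in, not the paper's exact correction: the grid width, the threshold `alpha`, and the way the covariance diagonal is adjusted are assumptions.

```python
# Keep each integer coordinate's marginal standard deviation large enough that
# the sampled value can still cross the nearest discretization boundary with
# probability at least `alpha`, preventing the variance from collapsing below
# the grid granularity.
import numpy as np
from scipy.stats import norm


def enforce_integer_margin(mean, sigma, cov, int_idx, step=1.0, alpha=0.05):
    """mean, cov: CMA-ES distribution parameters; sigma: step-size;
    int_idx: indices of coordinates discretized with grid width `step`."""
    cov = cov.copy()
    for i in int_idx:
        # Distance from the mean to the nearest bin boundary of the integer grid.
        dist = step / 2 - abs(mean[i] - step * np.round(mean[i] / step))
        # Smallest marginal std s such that 1 - Phi(dist / s) >= alpha.
        min_std = dist / norm.ppf(1.0 - alpha)
        if sigma * np.sqrt(cov[i, i]) < min_std:
            cov[i, i] = (min_std / sigma) ** 2
    return cov
```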
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with this NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Automated summarization of clinical texts can reduce the burden on medical professionals. Discharge summaries are a promising application of summarization because they can be generated from daily inpatient records. Our preliminary experiments show that 20-31% of the descriptions in discharge summaries overlap with the content of the inpatient records. However, it remains unclear how summaries should be generated from such unstructured sources. To decompose the physician's summarization process, this study aims to identify the optimal granularity for summarization. We first defined three types of summarization units with different granularities to compare the performance of discharge-summary generation: whole sentences, clinical segments, and clauses. Clinical segments, which we define in this study, are intended to express the smallest medically meaningful concepts. Obtaining clinical segments requires automatically splitting the text in the first stage of the pipeline, so we compared a rule-based method with a machine-learning method; the latter outperformed the former on the splitting task with an F1 score of 0.846. Next, we measured the accuracy of extractive summarization with the three types of units, based on the ROUGE-1 metric, on multi-institutional national health records in Japan. The measured accuracies of extractive summarization using whole sentences, clinical segments, and clauses were 31.91, 36.15, and 25.18, respectively. We found that clinical segments yielded higher accuracy than sentences and clauses. This result indicates that summarization of inpatient records requires finer granularity than sentence-oriented processing. Although we used only Japanese health records, the result can be interpreted as follows: physicians extract medically meaningful concepts from patient records and recombine them ...
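The extraction accuracies reported above are ROUGE-1 scores; a minimal sketch of ROUGE-1 recall over tokenized units is shown below. The study's tokenizer and exact ROUGE configuration are not specified here and are assumed.

```python
# ROUGE-1 recall: unigram overlap between a system summary and a reference.
from collections import Counter


def rouge1_recall(system_tokens, reference_tokens):
    sys_counts = Counter(system_tokens)
    ref_counts = Counter(reference_tokens)
    overlap = sum(min(count, sys_counts[tok]) for tok, count in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)


# Hypothetical example: extracted content scored against a reference summary.
print(rouge1_recall(["fever", "improved", "after", "antibiotics"],
                    ["fever", "improved", "with", "antibiotics"]))  # 0.75
```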
This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine-learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is partitioned and processed cooperatively between an edge server and an IoT device over the network. The architecture of the neural network model therefore significantly affects the communication payload size, model accuracy, and computational load. In this paper, we address the challenge of optimizing neural network architectures for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and a split point so as to meet a latency requirement (i.e., the total latency of computation and communication is kept below a given threshold). NASC employs one-shot NAS, which does not require repeated model training, for a computationally efficient architecture search. Our performance evaluation using hardware (HW)-NAS-Bench benchmark data shows that the proposed NASC improves the trade-off between communication latency and model accuracy, reducing latency by about 40-60% from the baseline with slight accuracy degradation.
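A hedged sketch of the joint selection of an architecture and a split point under a latency budget; the accuracy and latency estimators are placeholder callables (e.g., lookup tables built from HW-NAS-Bench-style measurements), and the exhaustive loop stands in for the paper's one-shot NAS search.

```python
# Jointly pick the architecture and split point that maximize estimated
# accuracy while keeping device + transfer + server latency within budget.
def select_architecture_and_split(candidates, latency_budget,
                                  accuracy_of, device_latency,
                                  server_latency, transfer_latency):
    """candidates: iterable of architectures, each a list of layers.
    accuracy_of(arch) and *_latency(arch, split) are hypothetical estimators."""
    best = None
    for arch in candidates:
        for split in range(len(arch) + 1):  # split == 0: run everything on the server
            total = (device_latency(arch, split)      # layers [0, split) on the IoT device
                     + transfer_latency(arch, split)  # payload of the split-point tensor
                     + server_latency(arch, split))   # remaining layers on the edge server
            if total <= latency_budget:
                acc = accuracy_of(arch)  # e.g. evaluated with shared supernet weights
                if best is None or acc > best[0]:
                    best = (acc, arch, split)
    return best  # (accuracy, architecture, split point) or None if infeasible
```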
Neural architecture search (NAS) aims to automate the architecture design process and improve the performance of deep neural networks. Platform-aware NAS methods consider both performance and complexity and can find well-performing architectures that require few computational resources. Although ordinary NAS methods incur enormous computational cost owing to repeated model training, one-shot NAS, which trains a supernet containing all candidate architectures only once during the search process, has been reported to achieve a lower search cost. This study focuses on architecture-complexity-aware one-shot NAS, which optimizes an objective function composed of the weighted sum of two metrics, such as predictive performance and the number of parameters. In existing methods, the architecture search process must be run multiple times with different coefficients of the weighted sum to obtain multiple architectures with different complexities. This study aims to reduce the search cost associated with finding such multiple architectures. The proposed method uses multiple distributions to generate architectures with different complexities and updates each distribution using the samples obtained from all of the distributions, based on importance sampling. The proposed method thus obtains multiple architectures with different complexities in a single architecture search, reducing the search cost. The proposed method is applied to the architecture search of convolutional neural networks on the CIFAR-10 and ImageNet datasets. Compared with baseline methods, the proposed method finds multiple architectures with varying complexities while requiring less computational effort.
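The idea of sharing samples across several search distributions via importance sampling can be sketched as follows; the categorical distribution family, the REINFORCE-style update, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Update K categorical search distributions (one per complexity trade-off)
# from a shared pool of sampled architectures, reweighting each sample by the
# likelihood ratio between the target distribution and the sampling mixture.
import numpy as np


def update_distributions(thetas, samples, scores, lr=0.1):
    """thetas: [K, L, O] per-layer categorical parameters of K distributions;
    samples: [N, L] op indices drawn from the uniform mixture of the K
    distributions; scores: [K, N] objective of each sample under each
    distribution's trade-off coefficient (higher is better)."""
    K, L, O = thetas.shape
    N = len(samples)

    # Likelihood of each sample under each distribution and under the mixture.
    lik = np.ones((K, N))
    for k in range(K):
        for n, arch in enumerate(samples):
            lik[k, n] = np.prod(thetas[k, np.arange(L), arch])
    mixture = lik.mean(axis=0)

    for k in range(K):
        w = lik[k] / np.maximum(mixture, 1e-12)                      # importance weights
        util = w * (scores[k] - np.average(scores[k], weights=w))    # weighted baseline
        grad = np.zeros((L, O))
        for n, arch in enumerate(samples):
            onehot = np.zeros((L, O))
            onehot[np.arange(L), arch] = 1.0
            grad += util[n] * (onehot - thetas[k])                   # REINFORCE-style step
        thetas[k] = np.clip(thetas[k] + lr * grad / N, 1e-3, None)
        thetas[k] /= thetas[k].sum(axis=1, keepdims=True)            # renormalize per layer
    return thetas
```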
We propose a theoretical framework of multi-way similarity for modeling real-valued data as hypergraphs that are clustered via spectral embedding. In graph-based spectral clustering, real-valued data are usually modeled as graphs by representing pairwise similarities with a kernel function, because the kernel function has a theoretical connection to the graph cut. For problems in which multi-way similarities are more suitable than pairwise similarities, it is natural to model the data as a hypergraph, a generalization of a graph. However, although hypergraph cuts have been well studied, a framework for modeling multi-way similarities based on the hypergraph cut has not been established. In this paper, we formulate multi-way similarities by exploiting the theoretical foundations of kernel functions. We show the theoretical connection between our formulation and the hypergraph cut in two ways, via weighted kernel $k$-means and via the heat kernel, which justifies our formulation. We also present a fast algorithm for spectral clustering. Our algorithm empirically shows better performance than existing graph-based and other heuristic modeling methods.
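For context, the pairwise case that this work generalizes rests on the classical equivalence between weighted kernel $k$-means and graph cuts (Dhillon et al.); the standard weighted kernel $k$-means objective is sketched below, while the paper's own hypergraph-cut formulation is not reproduced here.

```latex
% Weighted kernel k-means objective (pairwise case), with feature map \phi,
% point weights w_i, and weighted cluster centroids m_c:
\min_{\pi_1,\dots,\pi_k} \; \sum_{c=1}^{k} \sum_{x_i \in \pi_c}
  w_i \,\bigl\lVert \phi(x_i) - m_c \bigr\rVert^2,
\qquad
m_c = \frac{\sum_{x_j \in \pi_c} w_j \,\phi(x_j)}{\sum_{x_j \in \pi_c} w_j}.
```

For suitable choices of the weights and kernel, minimizing this objective is known to be equivalent to minimizing a normalized graph cut, which is the kind of connection the abstract states is extended from graphs to hypergraphs.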
This paper proposes a novel training method for end-to-end scene text recognition. End-to-end scene text recognition offers high recognition accuracy, especially when a Transformer-based encoder-decoder model is used. To train a highly accurate end-to-end model, we need to prepare a large image-to-text paired dataset for the target language. However, it is difficult to collect such data, particularly for resource-poor languages. To overcome this difficulty, the proposed method utilizes well-prepared large datasets in resource-rich languages such as English to train the encoder-decoder model for a resource-poor language. Our key idea is to build a model whose encoder reflects knowledge of multiple languages while the decoder specializes in the resource-poor language. To this end, the proposed method pre-trains the encoder with a multilingual dataset that combines the resource-poor language's dataset and the resource-rich languages' datasets, so that it learns language-invariant knowledge for scene text recognition. The proposed method also pre-trains the decoder with the resource-poor language's dataset to make the decoder better suited to that language. Experiments on Japanese scene text recognition using a small public dataset demonstrate the effectiveness of the proposed method.
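A hedged PyTorch sketch of a pre-training schedule consistent with the description above; the three-stage ordering, the freezing of the encoder during decoder pre-training, and the final fine-tuning pass are assumptions, as are the placeholder `model`, data loaders, and `train_epoch`.

```python
# Two-stage pre-training (multilingual encoder, resource-poor decoder)
# followed by fine-tuning on the resource-poor language.
import torch


def pretrain_and_finetune(model, multilingual_loader, poor_lang_loader,
                          train_epoch, epochs=(10, 10, 10), lr=1e-4):
    """model exposes .encoder and .decoder submodules (Transformer-based)."""
    # Stage 1: pre-train on combined multilingual data so the encoder learns
    # language-invariant knowledge for scene text recognition.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs[0]):
        train_epoch(model, multilingual_loader, opt)

    # Stage 2: pre-train the decoder on the resource-poor language only,
    # keeping the multilingual encoder fixed.
    for p in model.encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.decoder.parameters(), lr=lr)
    for _ in range(epochs[1]):
        train_epoch(model, poor_lang_loader, opt)

    # Stage 3: fine-tune the whole model on the resource-poor language.
    for p in model.encoder.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs[2]):
        train_epoch(model, poor_lang_loader, opt)
    return model
```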
This paper proposes a novel knowledge distillation method for dialogue sequence labeling. Dialogue sequence labeling is a supervised learning task that estimates a label for each utterance in a target dialogue document, and it is useful for many applications such as dialogue act estimation. Accurate labeling is often achieved by hierarchically structured large models consisting of utterance-level and dialogue-level networks that capture context within an utterance and between utterances, respectively. However, such models cannot be deployed on resource-constrained devices because of their size. To overcome this difficulty, we focus on knowledge distillation, which trains a small model by distilling the knowledge of a large, high-performing teacher model. Our key idea is to distill the knowledge while keeping the complex contexts captured by the teacher model. To this end, the proposed method, hierarchical knowledge distillation, trains the small model by distilling the knowledge of the utterance-level and dialogue-level contexts learned by the teacher model, training the student to mimic the teacher model's output at each level. Experiments on dialogue act estimation and call scene segmentation demonstrate the effectiveness of the proposed method.
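A hedged sketch of a hierarchical distillation loss consistent with the description above; the MSE/KL loss forms, the projection of student features to the teacher's dimensions, the loss weights, and the temperature are assumptions rather than the paper's exact configuration.

```python
# Student mimics the teacher at three points: utterance-level representations,
# dialogue-level representations, and label posteriors.
import torch
import torch.nn.functional as F


def hierarchical_kd_loss(student_out, teacher_out, labels,
                         w_utt=1.0, w_dial=1.0, w_kd=1.0, temperature=2.0):
    """Both *_out dicts hold 'utt' [U, Du], 'dial' [U, Dd], 'logits' [U, C] for
    the U utterances of one dialogue (student features already projected to the
    teacher's dimensions)."""
    ce = F.cross_entropy(student_out["logits"], labels)              # hard labels
    kd = F.kl_div(F.log_softmax(student_out["logits"] / temperature, dim=-1),
                  F.softmax(teacher_out["logits"] / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2          # soft labels
    utt = F.mse_loss(student_out["utt"], teacher_out["utt"])         # utterance-level context
    dial = F.mse_loss(student_out["dial"], teacher_out["dial"])      # dialogue-level context
    return ce + w_kd * kd + w_utt * utt + w_dial * dial
```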